21 research outputs found

    Biogeography-based Optimization in Noisy Environments

    Biogeography-based optimization (BBO) is a new evolutionary optimization algorithm based on the science of biogeography. In this paper, BBO is applied to optimization problems in which the fitness function is corrupted by random noise. Noise interferes with the BBO immigration and emigration rates and adversely affects optimization performance. We analyse the effect of noise on BBO using a Markov model. We also incorporate re-sampling in BBO, which samples the fitness of each candidate solution several times and averages the samples to alleviate the effects of noise. BBO performance on noisy benchmark functions is compared with particle swarm optimization (PSO), differential evolution (DE), self-adaptive DE (SaDE) and PSO with constriction (CPSO). The results show that SaDE performs best and BBO performs second best. In addition, BBO with re-sampling is compared with Kalman filter-based BBO (KBBO). The results show that BBO with re-sampling achieves almost the same performance as KBBO but consumes less computational time.
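
    As a rough illustration of the re-sampling idea described above, the sketch below averages several noisy fitness evaluations of one candidate; the function names (resampled_fitness, noisy_sphere) and the sample count are illustrative, not taken from the paper.

```python
import numpy as np

def resampled_fitness(candidate, noisy_fitness, n_samples=5):
    """Evaluate a candidate several times and return the mean fitness.

    Averaging n independent noisy evaluations reduces the noise standard
    deviation by roughly a factor of sqrt(n)."""
    samples = [noisy_fitness(candidate) for _ in range(n_samples)]
    return float(np.mean(samples))

# Example: a sphere function corrupted by additive Gaussian noise.
def noisy_sphere(x, sigma=0.1):
    return float(np.sum(np.asarray(x) ** 2) + np.random.normal(0.0, sigma))

print(resampled_fitness([0.5, -0.3], noisy_sphere, n_samples=10))
```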

    Hybrid biogeography-based evolutionary algorithms

    Hybrid evolutionary algorithms (EAs) are effective optimization methods that combine multiple EAs. We propose several hybrid EAs by combining some recently developed EAs with a biogeography-based hybridization strategy. We test our hybrid EAs on the continuous optimization benchmarks from the 2013 Congress on Evolutionary Computation (CEC) and on some real-world traveling salesman problems. The new hybrid EAs include two approaches to hybridization: (1) iteration-level hybridization, in which various EAs and BBO are executed in sequence; and (2) algorithm-level hybridization, which runs various EAs independently and then exchanges information between them using ideas from biogeography. Our empirical study shows that the new hybrid EAs significantly outperform their constituent algorithms with the selected tuning parameters and generation limits, and that algorithm-level hybridization is generally better than iteration-level hybridization. Results also show that the best new hybrid algorithm in this paper is competitive with the algorithms from the 2013 CEC competition. In addition, we show that the new hybrid EAs are generally robust to tuning parameters. In summary, the contribution of this paper is the introduction of biogeography-based hybridization strategies to the EA community.
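
    As a loose illustration of iteration-level hybridization, the sketch below alternates the variation step of a constituent EA with a BBO-style migration step using simple rank-based immigration/emigration rates; the rate model, the toy Gaussian step and all function names are illustrative rather than the paper's actual operators.

```python
import numpy as np

def bbo_migration(pop, fitness, rng):
    """One BBO-style migration step (minimisation): fitter solutions emigrate
    decision variables, poorer solutions immigrate them, with rank-based rates."""
    n, d = pop.shape
    order = np.argsort(fitness)            # best solution first
    rank = np.empty(n)
    rank[order] = np.arange(n)
    mu = 1.0 - rank / (n - 1)              # emigration rate, high for good solutions
    lam = 1.0 - mu                         # immigration rate, high for poor solutions
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:
                k = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = pop[k, j]
    return new_pop

def gaussian_step(pop, rng, sigma=0.1):
    """Toy stand-in for the variation step of one constituent EA."""
    return pop + rng.normal(0.0, sigma, size=pop.shape)

def iteration_level_hybrid(step_fns, pop, objective, generations, seed=0):
    """Cycle through the constituent EAs' steps, applying BBO migration after each."""
    rng = np.random.default_rng(seed)
    for g in range(generations):
        pop = step_fns[g % len(step_fns)](pop, rng)
        fitness = np.array([objective(x) for x in pop])
        pop = bbo_migration(pop, fitness, rng)
    return pop

# Example: minimise the sphere function with a single toy constituent EA.
pop0 = np.random.default_rng(1).uniform(-5.0, 5.0, size=(20, 10))
result = iteration_level_hybrid([gaussian_step], pop0,
                                lambda x: float(np.sum(x ** 2)), generations=50)
```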

    Combining deep neural network with traditional classifier to recognize facial expressions

    Facial expressions are important in people's daily communication. Recognising facial expressions also has many important applications in areas such as healthcare and e-learning. Existing facial expression recognition systems suffer from problems such as background interference. Furthermore, systems using traditional approaches like the Support Vector Machine (SVM) are weak at dealing with unseen images, while systems using deep neural networks require a GPU, long training times and large amounts of memory. To overcome the shortcomings of both pure deep neural networks and traditional facial recognition approaches, this paper presents a new facial expression recognition approach that uses image pre-processing techniques to remove unnecessary background information and combines the deep neural network ResNet50 with a traditional classifier, the multiclass Support Vector Machine, to recognise facial expressions. The proposed approach has better recognition accuracy than traditional approaches like the Support Vector Machine and does not need a GPU. We compared three proposed frameworks with a traditional SVM approach on the Karolinska Directed Emotional Faces (KDEF) database, the Japanese Female Facial Expression (JAFFE) database and the extended Cohn-Kanade dataset (CK+), respectively. The experimental results show that the features extracted from the layer 49 ReLU give the best performance on these three datasets.
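
    A minimal sketch of the deep-features-plus-traditional-classifier pipeline is given below, assuming PyTorch/torchvision and scikit-learn; the pooled 2048-d ResNet50 features used here stand in for the paper's layer 49 ReLU features, and the data variable names are placeholders.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained ResNet50 with the final classification layer removed, so the
# output is the pooled 2048-d deep feature vector. (Using the pooled features
# here is an assumption; the paper reports using the "layer 49 ReLU" features.)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Return an (N, 2048) NumPy array of deep features for a list of PIL images."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    with torch.no_grad():                     # CPU-only inference, no GPU required
        feats = feature_extractor(batch)      # shape (N, 2048, 1, 1)
    return feats.flatten(1).numpy()

# Multiclass SVM trained on the extracted features (data names are placeholders):
# clf = SVC(kernel="linear", decision_function_shape="ovr")
# clf.fit(extract_features(train_face_crops), train_labels)
# predictions = clf.predict(extract_features(test_face_crops))
```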

    Stereo vision-based autonomous navigation for oil and gas pressure vessel inspection using a low-cost UAV

    It is vital to visually inspect pressure vessels regularly in the oil and gas industry to maintain their integrity. Compared with visual inspection conducted by sending engineers and ground vehicles into the pressure vessel, using an autonomous Unmanned Aerial Vehicle (UAV) can overcome many limitations, including high labour intensity, low efficiency and high risk to human health. This work focuses on enhancing some existing technologies to support low-cost UAV autonomous navigation for visual inspection of oil and gas pressure vessels. The UAV gains the ability to follow a planned trajectory autonomously and record videos with a stereo camera inside the pressure vessel, which is a GPS-denied and low-illumination environment. In particular, ORB-SLAM3 is improved by adopting an image contrast enhancement technique to locate the UAV in this challenging scenario. Moreover, a vision-based hybrid Proportional-Proportional-Integral-Derivative (P-PID) position tracking controller is integrated to control the movement of the UAV. The ROS-Gazebo-PX4 simulator is deeply customised to validate the developed stereo vision-based autonomous navigation approach. It is verified that, compared with the original ORB-SLAM3, the numbers of ORB feature points and effective matching points obtained by the improved ORB-SLAM3 are increased by more than 400% and 600%, respectively. Thereby, the improved ORB-SLAM3 is effective and robust enough for UAV self-localisation, and the developed stereo vision-based autonomous navigation approach can be deployed for pressure vessel visual inspection.
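
    The sketch below illustrates the general idea of enhancing low-illumination frames before ORB feature extraction; CLAHE is only a plausible stand-in, since the abstract does not name the specific contrast enhancement technique, and the parameter values are illustrative.

```python
import cv2

# The paper improves ORB-SLAM3 with an image contrast enhancement step;
# CLAHE is used here as one plausible choice (an assumption, not the
# paper's stated technique).
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
orb = cv2.ORB_create(nfeatures=2000)

def enhanced_orb_features(bgr_frame):
    """Enhance a low-illumination frame, then detect ORB keypoints on it."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)
    keypoints, descriptors = orb.detectAndCompute(enhanced, None)
    return keypoints, descriptors
```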

    A robust learned feature-based visual odometry system for UAV pose estimation in challenging indoor environments

    Unmanned Aerial Vehicles (UAVs) are becoming popular due to their versatility and flexibility for indoor applications, such as autonomous visual inspection of the inner surface of a pressure vessel. Nevertheless, robust and reliable position estimation is critical for completing these tasks. Visual Odometry (VO) and Visual Simultaneous Localisation and Mapping (VSLAM) allow the UAV to estimate its position in unknown environments. However, traditional feature-based VO/VSLAM systems struggle to deal with complex scenes such as low-illumination and textureless environments. Replacing traditional features with deep learning-based features helps to handle these challenging environments, but efficiency is often neglected. In this work, an efficient VO system based on a novel lightweight feature extraction network for UAV onboard platforms has been developed. Deformable Convolution (DFConv) is utilised to improve the feature extraction capability. Owing to the limited onboard computing capability, Depth-wise Separable Convolution (DWConv) is adopted to calculate the offsets for the deformable convolution and to construct the backbone network, improving feature extraction efficiency. Experiments on public datasets indicate that the efficiency of the VO system is improved by 30.03% while preserving accuracy on embedded platforms, with the feature points and descriptors detected by the proposed Convolutional Neural Network (CNN). Moreover, the proposed VO system is verified through UAV flight tests in a real-world scenario. The results prove that the proposed VO system is able to handle challenging environments where both the latest traditional and deep learning feature-based VO/VSLAM systems fail, and that it is feasible for UAV self-localisation and autonomous navigation in confined, low-illumination and textureless indoor environments.
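
    The block below sketches, in PyTorch, how a depthwise separable convolution can predict the offsets consumed by a deformable convolution; the channel sizes, layout and class name are illustrative and not the paper's actual network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class LightweightDeformBlock(nn.Module):
    """Sketch of the idea above: a depthwise separable convolution predicts
    the sampling offsets that drive a deformable 3x3 convolution. Channel
    sizes and layout are illustrative, not the paper's network."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise separable conv producing the 2*3*3 = 18 offset channels.
        self.offset_dw = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.offset_pw = nn.Conv2d(in_ch, 18, 1)
        # Deformable convolution that consumes the predicted offsets.
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        offsets = self.offset_pw(self.offset_dw(x))
        return self.act(self.deform(x, offsets))

# Example: a 1x32x120x160 feature map passed through the block.
block = LightweightDeformBlock(32, 64)
out = block(torch.randn(1, 32, 120, 160))
print(out.shape)  # torch.Size([1, 64, 120, 160])
```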

    Zero-valent iron-copper bimetallic catalyst supported on graphite from spent lithium-ion battery anodes and mill scale waste for the degradation of 4-chlorophenol in aqueous phase

    Graphite-supported zero-valent iron-copper bimetallic catalysts (ZVI-Cu/C) were successfully prepared from mill scale (MS) waste and spent lithium-ion battery (LIB) anodes using carbothermic reduction, as a new approach for the recycling and revalorization of these wastes. Cu and graphite were obtained from the LIB anodes, while ZVI was provided by the MS waste. ZVI-Cu/C catalysts were synthesized with different MS to LIB anode powder mass ratios (1 to 4) and used as catalysts for the degradation of 4-chlorophenol (4-CP) in water by both reduction and heterogeneous Fenton reactions. ZVI-Cu/C-2 showed the highest removal percentage of 4-CP in both reactions. The degradation rates fitted well to a pseudo first-order model for both reactions. Moreover, the ZVI-Cu/C-2 catalyst showed relatively low leaching of iron and copper ions and high activity in 4-CP removal even in the fourth reuse cycle, which supports the high stability of the synthesized catalyst. Hydroquinone and 4-chlorocatechol were identified as the main intermediate by-products of 4-CP degradation. The results of this study support the possibility of synthesizing highly active and stable ZVI-Cu/C catalysts using graphite and copper from spent LIB anodes and iron oxide from MS waste. These catalysts show promising prospects for the removal of 4-CP in water, with activities comparable to others previously reported. This study reports, for the first time, the combined recycling of MS waste and spent LIB anodes to synthesize ZVI-Cu/C catalysts for water treatment by both oxidation and reduction reactions. This work was supported by the China Scholarship Council (202008310005), the National Natural Science Foundation of China (52070127) and the Science and Technology Commission of Shanghai Municipality (21WZ2501500).
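
    For reference, the pseudo first-order kinetic model to which the degradation rates were fitted has the standard form below (C is the 4-CP concentration at time t, C_0 the initial concentration and k_obs the observed rate constant; the fitted constants are reported in the paper, not here):

```latex
% Pseudo first-order degradation kinetics (standard form):
\[
  \frac{dC}{dt} = -k_{\mathrm{obs}}\, C
  \quad\Longrightarrow\quad
  C(t) = C_0\, e^{-k_{\mathrm{obs}} t}
  \quad\Longleftrightarrow\quad
  \ln\!\frac{C_0}{C(t)} = k_{\mathrm{obs}}\, t .
\]
```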

    Making industrial robots smarter with adaptive reasoning and autonomous thinking for real-time tasks in dynamic environments: a case study.

    In order to extend the abilities of current robots in industrial applications towards more autonomous and flexible manufacturing, this work presents an integrated system comprising real-time sensing, path planning and control of industrial robots, providing them with adaptive reasoning, autonomous thinking and environment interaction under dynamic and challenging conditions. The developed system consists of an intelligent motion planner for a 6-degrees-of-freedom robotic manipulator, which performs pick-and-place tasks along an optimized path computed in real time while avoiding a moving obstacle in the workspace. The moving obstacle is tracked by a machine vision-based sensing strategy that works in the HSV space for color detection, in order to deal with changing conditions including a non-uniform background, lighting reflections and projected shadows. The proposed machine vision is implemented as an off-board scheme with two low-cost cameras, where the second camera solves the problem of vision obstruction when the robot invades the field of view of the main sensor. Real-time performance of the overall system has been experimentally tested using a KUKA KR90 R3100 robot.
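
    A minimal OpenCV sketch of HSV-based obstacle detection of this kind is shown below; the HSV bounds and the centroid-based localisation are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def track_obstacle(bgr_frame, hsv_low=(100, 120, 70), hsv_high=(130, 255, 255)):
    """Detect a colour-marked obstacle and return its pixel centroid.

    The HSV bounds above (a blue-ish marker) are purely illustrative; the
    actual thresholds depend on the marker colour and the lighting."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
```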

    Smart sensing and adaptive reasoning for enabling industrial robots with interactive human-robot capabilities in dynamic environments: a case study.

    Traditional industry is seeing an increasing demand for more autonomous and flexible manufacturing in unstructured settings, a shift away from the fixed, isolated workspaces where robots perform predefined actions repetitively. This work presents a case study in which a robotic manipulator, namely a KUKA KR90 R3100, is provided with smart sensing capabilities such as vision and adaptive reasoning for real-time collision avoidance and online path planning in dynamically changing environments. A machine vision module based on low-cost cameras and color detection in the hue, saturation, value (HSV) space is developed to make the robot aware of its changing environment, allowing the detection and localization of a randomly moving obstacle. Path correction to avoid collisions between such obstacles and the robotic manipulator is achieved by exploiting an adaptive path planning module along with a dedicated robot control module, with the three modules running simultaneously. These smart sensing capabilities allow smooth interactions between the robot and its dynamic environment, where the robot reacts to dynamic changes through autonomous thinking and reasoning with reaction times below the average human reaction time. The experimental results demonstrate that effective human-robot and robot-robot interactions can be realized through the innovative integration of emerging sensing techniques, efficient planning algorithms and systematic designs.
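
    The sketch below illustrates one way the three simultaneously running modules (vision, path planning, robot control) could be wired together with threads and latest-value queues; the structure and all interface names (camera.read, planner.replan, robot.follow_path) are hypothetical, not the paper's implementation.

```python
import threading
import queue

# Size-1 queues so each consumer always sees the most recent data only.
obstacle_q = queue.Queue(maxsize=1)
path_q = queue.Queue(maxsize=1)

def publish_latest(q, item):
    """Replace whatever is in a size-1 queue with the newest item."""
    try:
        q.get_nowait()
    except queue.Empty:
        pass
    try:
        q.put_nowait(item)
    except queue.Full:
        pass

def vision_loop(grab_frame, detect, stop):
    """Detect the obstacle in every frame and publish its position."""
    while not stop.is_set():
        pos = detect(grab_frame())
        if pos is not None:
            publish_latest(obstacle_q, pos)

def planner_loop(replan, stop):
    """Re-plan a collision-free path whenever a new obstacle position arrives."""
    while not stop.is_set():
        try:
            pos = obstacle_q.get(timeout=0.1)
        except queue.Empty:
            continue
        publish_latest(path_q, replan(pos))

def control_loop(send_path, stop):
    """Stream the latest corrected path to the robot controller."""
    while not stop.is_set():
        try:
            send_path(path_q.get(timeout=0.1))
        except queue.Empty:
            continue

# Hypothetical wiring (camera, planner and robot interfaces are placeholders):
# stop = threading.Event()
# threading.Thread(target=vision_loop, args=(camera.read, track_obstacle, stop), daemon=True).start()
# threading.Thread(target=planner_loop, args=(planner.replan, stop), daemon=True).start()
# threading.Thread(target=control_loop, args=(robot.follow_path, stop), daemon=True).start()
```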

    Investigation of computer vision techniques for automatic detection of mild cognitive impairment in the elderly

    A huge number of elderly people worldwide suffer from cognitive impairment. Cognitive impairment can be divided into different stages, such as mild cognitive impairment (MCI) and severe cognitive impairment like dementia. Its early detection can be of great importance. However, it is challenging to detect cognitive impairment in the early stage with high accuracy and low cost, when most of the symptoms may not yet be fully expressed.
    Although there has been significant progress in detecting and diagnosing cognitive impairment in recent years, all existing techniques have their own weaknesses. Both traditional face-to-face cognitive tests and computer-based cognitive tests have problems diagnosing mild cognitive impairment. More specifically, personal information such as age, education and personality influences the test results and needs to be taken into consideration carefully.
    While neuroimaging techniques are widely used in clinics, their major weakness is the high expense required at the screening stage. Besides, neuroimaging techniques are often used to diagnose cognitive impairment only when patients are found to have serious cognitive problems.
    As a result, there is a pressing need for alternative methods to detect cognitive impairment in the early stage with high accuracy and low cost. Some research suggests that automatic facial expression recognition is promising in mental health care systems, as facial expressions can reflect people's mental state. Studies have shown that, whilst viewing videos, the facial expressions of people with cognitive impairment exhibit abnormal corrugator activity compared to those without cognitive impairment. Therefore, analysis of facial expressions has the potential to detect cognitive impairment.
    In this thesis, a novel strategy for cognitive impairment detection is proposed, which differs significantly from traditional methods like cognitive tests and neuroimaging techniques. The proposed strategy takes advantage of visual stimuli and mainly uses facial expressions and responses to detect cognitive impairment while the participants are presented with the visual stimuli. As a result, a novel strategy for cognitive impairment detection with acceptable accuracy and low cost is achieved.
    I present a novel deep convolutional network-based system to detect cognitive impairment in the early stage and support mental state diagnosis and detection. The proposed cognitive impairment detection system comprises three important units: the interface to arouse facial expressions, the proposed facial expression recognition algorithm, and the algorithm to detect cognitive impairment through the evolution of emotions. Within this system, facial expression analysis is an important part. For facial expression analysis, this research presents a new solution in which deep features are extracted from Fully Connected Layer 6 of AlexNet, with a standard Linear Discriminant Analysis classifier exploited to train on these deep features more efficiently.
    The proposed algorithms are tested on five benchmark databases: databases with limited images such as JAFFE, KDEF and CK+, and databases with images 'in the wild' such as FER2013 and AffectNet.
    Compared with traditional methods and state-of-the-art methods proposed by other researchers, the algorithms have overall higher facial expression recognition accuracy. In comparison to state-of-the-art deep learning algorithms such as VGG16, GoogleNet, ResNet and AlexNet, the proposed method also has good recognition accuracy, much shorter operating time and lower device requirements.
    To verify the system, I first designed the experiments. Clinical experiments were then carried out in Shanghai, with support from Dr Xia Li, chief physician at the Mental Health Centre in Shanghai. After the recruitment procedure, a group of elderly people, including cognitively impaired and cognitively healthy people, were invited to take part in the experiments. After the experiments in Shanghai, I classified the experimental data and used the proposed system to process it.
    I compared the major differences in the evolution of emotions, including angry, happy, neutral and sad, between the cognitively impaired and cognitively healthy people while they were watching the same video stimuli. In the selected testing group, the system had an overall cognitive impairment recognition accuracy of 66.7% using a KNN classifier based on the evolution of their emotions.
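
    As a rough illustration of classifying cognitive status from the evolution of emotions, the sketch below encodes a per-frame emotion sequence as the fraction of time spent in each emotion and feeds it to a KNN classifier; this encoding and all variable names are assumptions, not the thesis's actual feature representation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def evolution_features(frame_emotions):
    """Summarise a per-frame emotion sequence as the fraction of frames spent
    in each emotion (one simple way to encode 'emotion evolution'; the
    thesis's actual encoding may differ)."""
    frame_emotions = list(frame_emotions)
    return np.array([frame_emotions.count(e) / len(frame_emotions)
                     for e in EMOTIONS])

# Hypothetical data: each participant's per-frame emotion labels while
# watching the video stimuli, plus a cognitive-status label (0 healthy, 1 MCI).
# X = np.stack([evolution_features(seq) for seq in participant_sequences])
# y = np.array(participant_labels)
# knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# print(knn.predict(X_new))
```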

    Image enhancement and corrosion detection for UAV visual inspection of pressure vessels

    The condition of a pressure vessel is normally checked by human operators, which involves health and safety risks as well as low working efficiency and high inspection cost. Visual inspection of pressure vessels can instead be carried out by an Unmanned Aerial Vehicle (UAV) with a sensing module. Image enhancement and image processing techniques are vital in UAV inspection of pressure vessels. However, several issues must be overcome in UAV visual inspection of pressure vessels: the images captured by the UAV are of low quality in the cluttered environment due to poor lighting, noise and vibrations of the UAV. In this research, a system is developed for UAV visual inspection of pressure vessels using image processing and image enhancement techniques. In the developed system, the input image is first captured by the UAV. Next, efficient image enhancement techniques are applied to the images in order to enhance image quality. After that, the corroded part is detected and the percentage of the corrosion area in the entire image is measured. The proposed system has the potential to be implemented for autonomous corrosion detection with image enhancement techniques in UAV visual inspection of pressure vessels.
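
    A minimal sketch of the corrosion-area measurement step is given below, assuming a simple HSV threshold for rust-coloured pixels; the threshold band is illustrative and a deployed system would calibrate or replace it.

```python
import cv2
import numpy as np

def corrosion_percentage(bgr_image, hsv_low=(0, 60, 40), hsv_high=(25, 255, 200)):
    """Estimate the percentage of the image covered by rust-coloured pixels.

    The HSV range above is a rough, illustrative 'rust' band; the paper does
    not specify its detection thresholds."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    return 100.0 * np.count_nonzero(mask) / mask.size
```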